Top-k high average utility sequential pattern mining algorithm under one-off condition
Keshuai YANG, Youxi WU, Meng GENG, Jingyu LIU, Yan LI
Journal of Computer Applications    2024, 44 (2): 477-484.   DOI: 10.11772/j.issn.1001-9081.2023030268

To address the issue that traditional Sequential Pattern Mining (SPM) does not consider pattern repetition and ignores the effects of utility (unit price or profit) and pattern length on user interest, a Top-k One-off high average Utility sequential Pattern mining (TOUP) algorithm was proposed. The TOUP algorithm consists of two core steps: average utility calculation and candidate pattern generation. Firstly, a Calculation Support of Pattern (CSP) algorithm based on the occurrence positions of each item and an item repetition relation array was proposed to calculate pattern support, enabling fast calculation of the average utility of patterns. Secondly, candidate patterns were generated by itemset extension and sequence extension, and an upper bound on the maximum average utility was proposed; based on this upper bound, candidate patterns were pruned effectively. Experimental results on five real datasets and one synthetic dataset show that, compared with the TOUP-dfs and HAOP-ms algorithms, TOUP reduces the number of candidate patterns by 38.5% to 99.8% and 0.9% to 77.6% respectively, and decreases the running time by 33.6% to 97.1% and 57.9% to 97.2% respectively. TOUP therefore performs better and can mine patterns of interest to users more efficiently.
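
The core of such top-k mining is the interplay between a rising threshold and an anti-monotone upper bound. Below is a minimal, generic sketch of that loop in Python, not the paper's TOUP implementation; extend, avg_utility and upper_bound are hypothetical problem-specific callbacks.

```python
# Generic top-k mining skeleton: keep a min-heap of the k best patterns
# found so far, and prune any candidate whose upper bound on average
# utility cannot beat the current k-th best.
import heapq, itertools

def topk_mine(seed_patterns, extend, avg_utility, upper_bound, k):
    """extend / avg_utility / upper_bound are problem-specific callbacks."""
    tie = itertools.count()              # tiebreaker so patterns never compare
    topk, stack = [], list(seed_patterns)
    while stack:
        p = stack.pop()
        u = avg_utility(p)
        entry = (u, next(tie), p)
        if len(topk) < k:
            heapq.heappush(topk, entry)
        elif u > topk[0][0]:
            heapq.heapreplace(topk, entry)
        threshold = topk[0][0] if len(topk) == k else float("-inf")
        for q in extend(p):              # keep only extensions that can still win
            if upper_bound(q) > threshold:
                stack.append(q)
    return sorted(topk, reverse=True)
```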

Prediction of taxi demands between urban regions by fusing origin-destination spatial-temporal correlation
Yuan WEI, Yan LIN, Shengnan GUO, Youfang LIN, Huaiyu WAN
Journal of Computer Applications    2023, 43 (7): 2100-2106.   DOI: 10.11772/j.issn.1001-9081.2022091364

Accurate prediction of taxi demands between urban regions can provide decision support for taxi guidance and scheduling as well as passenger travel recommendation, so as to optimize the balance between taxi supply and demand. However, most existing models focus on predicting taxi demand within a region, do not sufficiently consider the spatial-temporal correlation between regions, and pay little attention to the finer-grained prediction of demand between regions. To solve these problems, a prediction model for taxi demands between urban regions, Origin-Destination fusion with Spatial-Temporal Network (ODSTN), was proposed. In this model, complex spatial-temporal correlations between regions were captured from two spatial dimensions (regions and region pairs) and three temporal dimensions (recent, daily and weekly periods) using graph convolution and attention mechanisms, and a new path perception fusion mechanism was designed to combine the multi-angle features and realize taxi demand prediction between urban regions. Experiments were carried out on two real taxi order datasets from Chengdu and Manhattan. The results show that the Mean Absolute Error (MAE), Root Mean Square Error (RMSE) and Mean Absolute Percentage Error (MAPE) of ODSTN are 0.897 1, 3.527 4 and 50.655 6% on the one dataset and 0.589 6, 1.163 8 and 61.079 4% on the other, indicating that ODSTN achieves high accuracy in taxi demand prediction tasks.
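
As an illustration of the attention side of such fusion (a sketch under our own assumptions, not ODSTN's architecture), scaled dot-product attention can weight and fuse features from the three temporal views; all names here are hypothetical.

```python
# Attention-weighted fusion of several temporal views of an OD pair.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(query, views):
    """query: (d,); views: (n_views, d) -> fused (d,) feature."""
    scores = views @ query / np.sqrt(len(query))   # (n_views,)
    w = softmax(scores)                            # attention weights
    return w @ views                               # weighted sum

rng = np.random.default_rng(0)
region_pair_state = rng.normal(size=16)            # current OD-pair embedding
temporal_views = rng.normal(size=(3, 16))          # recent, daily, weekly
fused = attention_fuse(region_pair_state, temporal_views)
```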

Multi-channel pathological image segmentation with gated axial self-attention
Zhi CHEN, Xin LI, Liyan LIN, Jing ZHONG, Peng SHI
Journal of Computer Applications    2023, 43 (4): 1269-1277.   DOI: 10.11772/j.issn.1001-9081.2022030333

In Hematoxylin-Eosin (HE)-stained pathological images, the uneven distribution of cell staining and the diversity of tissue morphologies pose great challenges to automated segmentation. Traditional convolutions cannot capture correlations between pixels in a large neighborhood, making it difficult to further improve segmentation performance. Therefore, a Multi-Channel Segmentation Network with gated axial self-attention (MCSegNet) model was proposed to segment nuclei in pathological images accurately. A dual-encoder and decoder structure was adopted, in which the axial self-attention encoding channel captured global features, while the residual-based convolutional encoding channel obtained fine local features. Feature fusion at the end of the encoding channels enhanced the feature representation and provided a good information base for the decoder, in which segmentation results were gradually generated by cascading multiple upsampling modules. In addition, an improved hybrid loss function was used to effectively alleviate the common problem of sample imbalance in pathological images. Experimental results on the MoNuSeg2020 public dataset show that the proposed segmentation method is 2.66 percentage points and 2.77 percentage points higher than U-Net in F1-score and Intersection over Union (IoU) respectively, effectively improving pathological image segmentation and the reliability of clinical diagnosis.
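
The hybrid-loss idea for imbalanced segmentation is commonly realized as pixel-wise cross-entropy plus region-wise Dice loss. A short sketch under that assumption (not necessarily MCSegNet's exact loss):

```python
# BCE is pixel-wise; Dice is region-wise and insensitive to imbalance.
import torch
import torch.nn.functional as F

def hybrid_loss(logits, target, eps=1e-6, alpha=0.5):
    """logits, target: (N, 1, H, W); target in {0, 1}."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    dice = (2 * inter + eps) / (prob.sum(dim=(1, 2, 3))
                                + target.sum(dim=(1, 2, 3)) + eps)
    return alpha * bce + (1 - alpha) * (1 - dice).mean()
```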

Repair method for process models with concurrent structures based on token replay
Erjing BAI, Xiaoyan LI, Yuyue DU
Journal of Computer Applications    2023, 43 (2): 499-506.   DOI: 10.11772/j.issn.1001-9081.2021122154

Process mining can build process models from the event logs generated by enterprise information management systems. Deviations between a process model and its event logs arise when the actual business process changes, and the process model then needs to be repaired. For process models with concurrent structures, the precision of some existing repair methods is reduced by the self-loops and invisible transitions they add. Therefore, a repair method for process models with concurrent structures based on logical Petri nets and token replay was proposed. Firstly, the insertion position of the sub-model was determined according to the relationship between the input-output places of the sub-model and the event logs. Then, the deviation positions were determined by token replay. Finally, a repair method based on logical Petri nets was designed. The correctness and effectiveness of this method were verified by simulations on the ProM platform and comparisons with Fahland's and other methods. The results show that the precision of this method is about 85%, an increase of 17 and 11 percentage points over Fahland's and Goldratt's methods respectively. In terms of simplicity, the proposed method adds no self-loops or invisible transitions, while Fahland's and Goldratt's methods add both. The fitness of all three methods is above 0.9, with Goldratt's method slightly lower. These results verify that the model repaired by the proposed method has higher fitness and precision.
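
Token replay itself is simple to state: fire each event's transition, consuming and producing tokens, and record where tokens are missing. A simplified sketch (a plain Petri net, not the paper's logical Petri net variant):

```python
# Missing or leftover tokens mark the deviation positions in the trace.
def replay(trace, net, initial_marking):
    """net: {transition: (consumed_places, produced_places)}."""
    marking = dict(initial_marking)
    missing, deviations = 0, []
    for i, event in enumerate(trace):
        consume, produce = net[event]
        for p in consume:
            if marking.get(p, 0) == 0:
                missing += 1                 # token had to be invented
                deviations.append((i, event, p))
            else:
                marking[p] -= 1
        for p in produce:
            marking[p] = marking.get(p, 0) + 1
    remaining = sum(marking.values())
    return deviations, missing, remaining

# toy net: a -> b -> c
net = {"a": ((), ("p1",)), "b": (("p1",), ("p2",)), "c": (("p2",), ())}
print(replay(["a", "c", "b"], net, {}))   # 'c' fires too early -> deviation
```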

Multi-objective optimization model for unmanned aerial vehicles trajectory based on decomposition and trajectory search
Junyan LIU, Feibo JIANG, Yubo PENG, Li DONG
Journal of Computer Applications    2023, 43 (12): 3806-3815.   DOI: 10.11772/j.issn.1001-9081.2022121882

Traditional Deep Learning (DL)-based multi-objective solvers suffer from low model utilization and easily fall into local optima. To address these problems, a Multi-objective Optimization model for Unmanned aerial vehicle Trajectories based on Decomposition and Trajectory search (DTMO-UT) was proposed, consisting of an encoding part and a decoding part. First, the encoding part contained a Device encoder (Dencoder) and a Weight encoder (Wencoder), which extracted the state information of Internet of Things (IoT) devices and the features of the weight vectors. The weight vectors represented the scalar optimization sub-problems decomposed from the Multi-objective Optimization Problem (MOP), so the MOP could be solved by solving all the sub-problems. As the Wencoder encoded all sub-problems, the utilization of the model was improved. Then, the decoding part, containing a Trajectory decoder (Tdecoder), decoded the encoded features to generate Pareto optimal solutions. Finally, to keep the greedy strategy from falling into local optima, trajectory search was added to the trajectory decoder: multiple candidate trajectories were generated, and the one with the best scalar value was selected as the Pareto optimal solution. In this way, the exploration ability of the trajectory decoder during trajectory planning was enhanced and a better-quality Pareto set was found. Simulation results show that, compared with mainstream DL MOP solvers, the proposed model reduces model parameters by 98.93% while reducing the distribution metric of the MOP solutions by 0.076%, improving the ductility of the solutions by 0.014% and increasing the overall performance by 1.23%, demonstrating the strong practical trajectory planning ability of DTMO-UT.
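
Decomposition is the part that is easy to show concretely: each weight vector defines one scalar sub-problem, and solving all sub-problems approximates the Pareto set. A toy weighted-sum sketch, independent of the DTMO-UT network:

```python
# One scalar sub-problem per weight vector; their optima trace the front.
import numpy as np

def objectives(x):
    """toy bi-objective: minimize both f1 and f2."""
    return np.array([x ** 2, (x - 2) ** 2])

weights = [np.array([i / 10, 1 - i / 10]) for i in range(11)]
candidates = np.linspace(-1, 3, 401)
pareto = []
for w in weights:                     # solve each scalarized sub-problem
    scores = [w @ objectives(x) for x in candidates]
    pareto.append(candidates[int(np.argmin(scores))])
print(sorted(set(round(x, 2) for x in pareto)))   # points between 0 and 2
```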

Contrast order-preserving pattern mining algorithm
Yufei MENG, Youxi WU, Zhen WANG, Yan LI
Journal of Computer Applications    2023, 43 (12): 3740-3746.   DOI: 10.11772/j.issn.1001-9081.2022121828

Existing contrast sequential pattern mining methods mainly focus on character sequence datasets and are difficult to apply to time series datasets. To address this, a new Contrast Order-preserving Pattern Mining (COPM) algorithm was proposed. Firstly, in the candidate pattern generation stage, a pattern fusion strategy was used to reduce the number of candidate patterns. Then, in the pattern support calculation stage, the support of a super-pattern was calculated from the matching results of its sub-patterns. Finally, a dynamic pruning strategy for the minimum support threshold was designed to further prune the candidate patterns. Experimental results show that on six real time series datasets, the memory consumption of the COPM algorithm is at least 52.1% lower than that of COPM-o (COPM-original), 36.8% lower than that of COPM-e (COPM-enumeration), and 63.6% lower than that of COPM-p (COPM-prune). At the same time, the running time of COPM is at least 30.3% lower than that of COPM-o, 8.8% lower than that of COPM-e and 41.2% lower than that of COPM-p. Therefore, COPM outperforms the COPM-o, COPM-e and COPM-p algorithms. The experimental results verify that COPM can effectively mine contrast order-preserving patterns and find the differences between classes of time series datasets.
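
Order-preserving matching, the primitive underneath COPM, can be stated in a few lines: a window of a time series matches a pattern if both have the same relative order. A sketch (COPM's pattern fusion and pruning are omitted):

```python
# Support of an order-preserving pattern = number of matching windows.
import numpy as np

def relative_order(seq):
    return tuple(np.argsort(np.argsort(seq)))

def support(series, pattern):
    m = len(pattern)
    target = relative_order(pattern)
    return sum(relative_order(series[i:i + m]) == target
               for i in range(len(series) - m + 1))

series = np.array([3.1, 4.0, 2.5, 3.6, 5.2, 1.0, 2.2, 4.4])
print(support(series, np.array([1.0, 2.0, 0.5])))  # rise-then-deep-drop shape
```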

Attribute reduction algorithm based on cluster granulation and divergence among clusters
Yan LI, Bin FAN, Jie GUO
Journal of Computer Applications    2022, 42 (9): 2701-2712.   DOI: 10.11772/j.issn.1001-9081.2021081371

Attribute reduction is a hot research topic in rough set theory. Most attribute reduction algorithms for continuous data are based on dominance relations or neighborhood relations. However, continuous datasets do not necessarily have dominance relations over their attributes, and although neighborhood-relation algorithms can adjust the granulation degree through the neighborhood radius, the differing dimensions of attributes and the continuous range of radius values make a unified radius hard to choose, so the whole parameterized granulation process is computationally expensive. To solve this problem, a multi-granularity attribute reduction strategy based on cluster granulation was proposed. Firstly, similar samples were grouped by clustering, and the concepts of approximate set, relative positive region and positive region reduction based on clustering were proposed. Secondly, according to Jensen-Shannon (JS) divergence theory, the difference in the data distribution of each attribute among clusters was measured, and representative features were selected to distinguish different clusters. Finally, an attribute reduction algorithm based on a discernibility matrix was designed. In the proposed algorithm, the attributes were not required to be ordered. Unlike a neighborhood radius, the clustering parameter was discrete, and the dataset could be divided into different granulation degrees by adjusting this parameter. Experimental results on UCI and Kent Ridge datasets show that this attribute reduction algorithm can deal with continuous data directly, and that by adjusting the parameter over a small discrete range, it removes redundant features while maintaining or even improving classification accuracy.
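
The JS-divergence step can be illustrated directly: histogram one attribute per cluster and compare the distributions. A sketch with hypothetical data and bin choices:

```python
# JS divergence between an attribute's distributions in two clusters.
import numpy as np

def js_divergence(p, q, eps=1e-12):
    p, q = p / p.sum(), q / q.sum()
    m = (p + q) / 2
    kl = lambda a, b: np.sum(a * np.log2((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

rng = np.random.default_rng(1)
attr_c1 = rng.normal(0.0, 1.0, 500)        # attribute values in cluster 1
attr_c2 = rng.normal(1.5, 1.0, 500)        # same attribute in cluster 2
bins = np.histogram_bin_edges(np.r_[attr_c1, attr_c2], bins=20)
p, _ = np.histogram(attr_c1, bins=bins)
q, _ = np.histogram(attr_c2, bins=bins)
print(js_divergence(p.astype(float), q.astype(float)))  # in [0, 1] with log2
```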

Facial expression recognition algorithm based on combination of improved convolutional neural network and support vector machine
Guifang QIAO, Shouming HOU, Yanyan LIU
Journal of Computer Applications    2022, 42 (4): 1253-1259.   DOI: 10.11772/j.issn.1001-9081.2021071270

In view of the problems of current Convolutional Neural Networks (CNN) that use end-layer features to recognize facial expressions, such as complex model structure, too many parameters and unsatisfactory recognition accuracy, an optimization algorithm combining an improved CNN with a Support Vector Machine (SVM) was proposed. First, the network was designed with the idea of continuous convolution to obtain more nonlinear activations. Then, an adaptive Global Average Pooling (GAP) layer was used to replace the fully connected layer of the traditional CNN, reducing the network parameters. Finally, to improve the generalization ability of the model, an SVM classifier was used instead of the traditional Softmax function to perform expression recognition. Experimental results show that the proposed algorithm achieves 73.4% and 98.06% recognition accuracy on the Fer2013 and CK+ datasets respectively, 2.2 percentage points higher than the traditional LeNet-5 algorithm on Fer2013. Moreover, the network model has a simple structure, few parameters and good robustness.
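
The GAP-then-SVM pipeline in miniature, as a sketch rather than the paper's network: convolutional features are pooled to a vector by global average pooling, then an SVM replaces Softmax as the classifier.

```python
# Toy stand-in for the CNN trunk; GAP yields a compact feature vector.
import torch
import torch.nn as nn
from sklearn.svm import SVC

feature_net = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten())       # GAP -> (N, 32)

with torch.no_grad():
    x = torch.randn(64, 1, 48, 48)               # fake 48x48 face crops
    feats = feature_net(x).numpy()
labels = torch.randint(0, 7, (64,)).numpy()      # 7 expression classes

svm = SVC(kernel="rbf").fit(feats, labels)       # SVM on GAP features
print(svm.score(feats, labels))
```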

Fast failure recovery method based on local redundant hybrid code
Jingyu LIU, Qiuxia NIU, Xiaoyan LI, Qiaoshuo SHI, Youxi WU
Journal of Computer Applications    2022, 42 (4): 1244-1252.   DOI: 10.11772/j.issn.1001-9081.2021111917

The parity blocks of a Maximum-Distance-Separable (MDS) code are all global parity blocks, so the reconstruction chain lengthens as the storage system grows and reconstruction performance gradually degrades. To address this, a new Non-Maximum-Distance-Separable (Non-MDS) code, the local redundant hybrid code Code-LM(sc), was proposed. Firstly, two types of local parity blocks, the horizontal parity block within a strip-set and the horizontal-diagonal parity block, were added to the strip-sets to shorten the reconstruction chain, and the parity layout of the local redundant hybrid code was designed. Then, four reconstruction formulations for lost data blocks were designed according to the generation rules of the parity blocks and the common blocks shared by the reconstruction chains of different data blocks. Finally, double-disk failures were divided into three cases depending on the distance between the strip-sets holding the failed disks, and corresponding reconstruction methods were designed. Theoretical analysis and experimental results show that, at the same storage scale, compared with RDP (Row-Diagonal Parity), the reconstruction time of Code-LM(sc) for single-disk and double-disk failures is reduced by 84% and 77% respectively, and compared with V2-Code, by 67% and 73% respectively. Therefore, the local redundant hybrid code supports fast recovery from disk failures and improves the reliability of the storage system.
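
Why local parity shortens the reconstruction chain is easiest to see with XOR arithmetic: a horizontal parity block is the XOR of the data blocks in its strip-set, so one lost block is rebuilt from that strip-set alone. A toy sketch, not the Code-LM(sc) layout:

```python
# Rebuild a lost block from its strip-set's local XOR parity.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

strip_set = [bytes([i] * 4) for i in (1, 2, 3)]   # three data blocks
parity = xor_blocks(strip_set)                    # local horizontal parity
lost = strip_set[1]
rebuilt = xor_blocks([strip_set[0], strip_set[2], parity])
assert rebuilt == lost
```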

Semantic segmentation of RGB-D indoor scenes based on attention mechanism and pyramid fusion
Na YU, Yan LIU, Xiongju WEI, Yuan WAN
Journal of Computer Applications    2022, 42 (3): 844-853.   DOI: 10.11772/j.issn.1001-9081.2021030392

Aiming at the ineffective fusion of multi-modal features in RGB-D indoor scene semantic segmentation, a network named APFNet (Attention mechanism and Pyramid Fusion Network) was proposed, with an attention mechanism fusion module and a pyramid fusion module. To fully exploit the complementarity of RGB and depth features, the attention allocation weights of the two kinds of features were extracted by the attention mechanism fusion module, making the network focus on the multi-modal feature domain carrying more information. Local and global information were then fused by the pyramid fusion module with pyramid features at four scales, extracting scene context and improving the segmentation accuracy of object edges and small-scale objects. By integrating these two fusion modules into a three-branch encoder-decoder network, an end-to-end output was realized. Comparative experiments were conducted with state-of-the-art methods, such as the multi-level RGB-D residual feature Fusion network (RDF-152), the Attention Complementary features Network (ACNet) and the Spatial information Guided convolution Network (SGNet), on the SUN RGB-D and NYU Depth v2 datasets. Compared with the best-performing method, RDF-152, and with the encoder depth reduced from 152 to 50 layers, the Pixel Accuracy (PA), Mean Pixel Accuracy (MPA) and Mean Intersection over Union (MIoU) of APFNet increased by 0.4, 1.1 and 3.2 percentage points respectively. Segmentation accuracies for small-scale objects such as pillows and photos, and for large-scale objects such as boards and ceilings, increased by 0.9 to 3.4 and 12.4 to 18 percentage points respectively. The results show that APFNet has clear advantages for semantic segmentation of indoor scenes.
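
The pyramid fusion module follows the familiar pattern of pooling a feature map at several scales, upsampling each result and concatenating. A generic PSPNet-style sketch under our own assumptions, not APFNet's exact module:

```python
# Mix local and global context by multi-scale pooling and concatenation.
import torch
import torch.nn.functional as F

def pyramid_fuse(feat, scales=(1, 2, 3, 6)):
    """feat: (N, C, H, W) -> (N, C * (1 + len(scales)), H, W)."""
    h, w = feat.shape[2:]
    outs = [feat]
    for s in scales:
        pooled = F.adaptive_avg_pool2d(feat, s)
        outs.append(F.interpolate(pooled, size=(h, w), mode="bilinear",
                                  align_corners=False))
    return torch.cat(outs, dim=1)

print(pyramid_fuse(torch.randn(1, 8, 30, 40)).shape)  # (1, 40, 30, 40)
```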

Voting instance selection algorithm based on learning to hash
Yajie HUANG, Junhai ZHAI, Xiang ZHOU, Yan LI
Journal of Computer Applications    2022, 42 (2): 389-394.   DOI: 10.11772/j.issn.1001-9081.2021071188

With the massive growth of data, how to store and use data has become a hot issue in academic research and industrial applications. Instance selection addresses these problems by selecting representative instances from the original data according to established rules, effectively reducing the difficulty of subsequent work. Therefore, a voting instance selection algorithm based on learning to hash was proposed. Firstly, Principal Component Analysis (PCA) was used to map high-dimensional data to a low-dimensional space. Secondly, the k-means algorithm was combined with vector quantization for iterative clustering, and the hash codes of the cluster centers were used to represent the data. The clustered data were then randomly sampled in proportion, and the final instances were selected by voting over several independent runs of the algorithm. Compared with the Compressed Nearest Neighbor (CNN) algorithm and LSH-IS-F (Instance Selection algorithm by Hashing with two passes), a linear-complexity instance selection algorithm for big data, the proposed algorithm improves the compression ratio by an average of 19%. The algorithm is simple to implement, and its compression ratio can be controlled automatically by adjusting the parameters. Experimental results on 7 datasets show that, with similar test accuracy, the proposed algorithm has a clear advantage over random hashing in compression ratio and running time.
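
A skeleton of the selection pipeline (hedged sketch: the proportions, run count and vote threshold are made up, and the paper's hash-code representation is replaced by plain cluster labels):

```python
# PCA -> k-means -> per-cluster sampling -> voting across runs.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def one_run(X, rng, n_clusters=10, ratio=0.1):
    Z = PCA(n_components=2).fit_transform(X)
    labels = KMeans(n_clusters, n_init=10,
                    random_state=rng.integers(1 << 30)).fit_predict(Z)
    picked = []
    for c in range(n_clusters):
        idx = np.flatnonzero(labels == c)
        picked.extend(rng.choice(idx, max(1, int(ratio * len(idx))),
                                 replace=False))
    return set(picked)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
votes = np.zeros(len(X))
for _ in range(5):                         # several independent runs
    for i in one_run(X, rng):
        votes[i] += 1
selected = np.flatnonzero(votes >= 3)      # keep instances voted in >= 3 runs
print(len(selected))
```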

Feature construction and preliminary analysis of uncertainty for meta-learning
Yan LI, Jie GUO, Bin FAN
Journal of Computer Applications    2022, 42 (2): 343-348.   DOI: 10.11772/j.issn.1001-9081.2021071198

Meta-learning applies machine learning methods (meta-algorithms) to seek the mapping between the features of a problem (meta-features) and the relative performance measures of algorithms, thereby forming meta-knowledge; how to construct and extract meta-features is an important research question. Concerning the fact that most meta-features used in related research are statistical features of the data, uncertainty modeling was proposed and the impact of uncertainty on the learning system was studied. Six kinds of uncertainty meta-features were defined for data or models, based on the inconsistency of data, the complexity of the boundary, the uncertainty of model output, the linear separability, the degree of attribute overlap, and the uncertainty of the feature space. At the same time, the uncertainty of the learning problem itself was measured from different perspectives, with specific definitions given. The correlations between these meta-features were analyzed on artificial and real datasets covering a large number of classification problems, and multiple classification algorithms, such as K-Nearest Neighbor (KNN), were used for a preliminary analysis of the correlation between the meta-features and test accuracy. The results show an average correlation of about 0.8, indicating that these meta-features have a significant impact on learning performance.
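
A toy version of the correlation analysis, using our own construction rather than the paper's six meta-features: vary class overlap across synthetic datasets, compute one simple uncertainty meta-feature and KNN accuracy, and correlate the two.

```python
# Meta-feature: fraction of points whose nearest neighbor has a
# different label (a simple inconsistency/overlap measure).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

feats, accs = [], []
for sep in np.linspace(0.2, 2.0, 10):
    X, y = make_classification(n_samples=400, n_features=10,
                               class_sep=sep, random_state=0)
    knn1 = KNeighborsClassifier(n_neighbors=1).fit(X, y)
    nn_idx = knn1.kneighbors(X, n_neighbors=2)[1][:, 1]   # skip self
    feats.append(np.mean(y[nn_idx] != y))
    accs.append(cross_val_score(KNeighborsClassifier(5), X, y, cv=5).mean())

print(np.corrcoef(feats, accs)[0, 1])      # strongly negative correlation
```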

Dynamic adjusting threshold algorithm for virtual machine migration
ZHAO Chun, YAN Lianshan, CUI Yunhe, XING Huanlai, FENG Bin
Journal of Computer Applications    2017, 37 (9): 2547-2550.   DOI: 10.11772/j.issn.1001-9081.2017.09.2547
Aiming at optimizing server energy consumption in data centers and choosing a reasonable time to migrate Virtual Machines (VM), a VM migration algorithm based on a Dynamic Adjusting Threshold (DAT) was proposed. Firstly, the migration threshold was adjusted dynamically by analyzing historical load data acquired from the Physical Machines (PM); the time to migrate VMs was then decided by a delay trigger mechanism and PM load trend prediction. The algorithm was tested on a laboratory data center platform. Experimental results indicate that, compared with the static threshold method, the proposed algorithm shuts down more PMs and lowers the energy consumption of the data center. The DAT-based VM migration algorithm migrates VMs dynamically according to the variation of PM load, improving resource utilization and migration efficiency while reducing the energy consumption of the data center.
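
The gist of such dynamic-threshold triggering can be sketched in a few lines; the window size, threshold formula and delay count below are our assumptions, not the paper's.

```python
# Threshold adapts to historical load; migration fires only after the
# load stays above it for several consecutive samples (delay trigger).
from collections import deque

class DATTrigger:
    def __init__(self, window=20, k=1.5, delay=3):
        self.history = deque(maxlen=window)
        self.k, self.delay, self.over = k, delay, 0

    def update(self, load):
        self.history.append(load)
        mean = sum(self.history) / len(self.history)
        var = sum((x - mean) ** 2 for x in self.history) / len(self.history)
        threshold = min(0.95, mean + self.k * var ** 0.5)
        self.over = self.over + 1 if load > threshold else 0
        return self.over >= self.delay    # True -> start VM migration

trig = DATTrigger()
loads = [0.4] * 15 + [0.9, 0.92, 0.93, 0.95]
print([trig.update(x) for x in loads][-4:])   # [False, False, True, True]
```
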
Data combination method based on structure's granulation
YAN Lin, LIU Tao, YAN Shuo, LI Feng, RUAN Ning
Journal of Computer Applications    2015, 35 (2): 358-363.   DOI: 10.11772/j.issn.1001-9081.2015.02.0358

To study the data combination problems that occur in real life, different kinds of data information were combined into a structure called an associated-combinatorial structure, constituted by a data set, an associated relation and a partition. The aim was to use this structure to establish a data combination method. To this end, the associated-combinatorial structure was transformed into a granulation structure by granulating the associated relation, and data combinations were completed in accordance with the data classifications. Moreover, because both an associated-combinatorial structure and a granulation structure can be represented by an associated matrix, the transformation from one structure to the other was characterized algebraically by matrix transformations. The research therefore not only analyzed data combination theoretically, but also established a data processing method based on matrix transformations. Accordingly, a computer program with linear complexity was written for the data combination method. Experimental results show that the program is accurate and fast.
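
One concrete reading of granulating an associated relation (our interpretation, not the paper's exact construction): take the relation's Boolean matrix, close it transitively with Warshall's algorithm, and read the resulting blocks off as the granules to combine.

```python
# Transitive closure of an associated matrix yields the granules.
import numpy as np

M = np.array([[1, 1, 0, 0],        # associated matrix over 4 data items
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=bool)

def warshall(R):
    R = R.copy()
    for k in range(len(R)):
        R |= np.outer(R[:, k], R[k, :])   # add paths through item k
    return R

closure = warshall(M)
granules = {frozenset(np.flatnonzero(row)) for row in closure}
print([sorted(g) for g in granules])      # [[0, 1], [2, 3]] -> combined groups
```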

Analysis on distinguishing product reviews based on top-k emerging patterns
LIU Lu, WANG Yining, DUAN Lei, NUMMENMAA Jyrki, YAN Li, TANG Changjie
Journal of Computer Applications    2015, 35 (10): 2727-2732.   DOI: 10.11772/j.issn.1001-9081.2015.10.2727
With the development of e-commerce, online shopping websites provide reviews to help customers make the best choices. However, reviews are numerous, and their content is typically redundant and non-standard, so it is difficult for users to read all the reviews in a short time and find the distinguishing characteristics of a product from them. To resolve this problem, a method to mine top-k emerging patterns was proposed and applied to reviews of different products. Based on the proposed method, a prototype called ReviewScope was designed and implemented. ReviewScope finds significant comments on given goods as a decision basis and provides visualization results. A case study on a real-world dataset from JD.com demonstrates that ReviewScope is effective, flexible and user-friendly.
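
The emerging-pattern criterion reduces to a growth-rate ranking: a pattern is emerging for product A when its support grows sharply from B's reviews to A's. A sketch with hard-coded toy patterns (real ones are mined from review text):

```python
# Rank patterns by support growth rate between two review sets.
import heapq

def support(pattern, reviews):
    return sum(pattern <= r for r in reviews) / len(reviews)

def topk_emerging(patterns, rev_a, rev_b, k=2, eps=1e-9):
    scored = [(support(p, rev_a) / (support(p, rev_b) + eps), p)
              for p in patterns]
    return heapq.nlargest(k, scored)

rev_a = [{"battery", "great"}, {"battery", "long", "great"}, {"screen"}]
rev_b = [{"battery", "poor"}, {"screen", "great"}, {"screen"}]
patterns = [frozenset(p) for p in
            [{"battery", "great"}, {"screen"}, {"great"}]]
print(topk_emerging(patterns, rev_a, rev_b))
```
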
PM2.5 concentration prediction model of least squares support vector machine based on feature vector
LI Long, MA Lei, HE Jianfeng, SHAO Dangguo, YI Sanli, XIANG Yan, LIU Lifang
Journal of Computer Applications    2014, 34 (8): 2212-2216.   DOI: 10.11772/j.issn.1001-9081.2014.08.2212

To predict Fine Particulate Matter (PM2.5) concentration, a prediction model was proposed. First, a comprehensive meteorological index was introduced to jointly account for wind, humidity and temperature; then a feature vector was constructed by combining this index with the measured concentrations of SO2, NO2, CO and PM10; finally, a Least Squares Support Vector Machine (LS-SVM) prediction model was built from the feature vectors and PM2.5 concentration data. Experimental results on 2013 data from the environmental monitoring centers of city A and city B show that forecast accuracy improves after introducing the comprehensive meteorological index, with error reduced by nearly 30%. The proposed model predicts PM2.5 concentration accurately and generalizes well. Furthermore, the relationship between PM2.5 concentration and hospitalization rate and hospital outpatient volume was analyzed, and a high correlation was found between them.
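
LS-SVM regression reduces to solving one linear system, which is why it suits this kind of tabular prediction. A generic numpy sketch (the actual feature vectors combine the meteorological index with pollutant readings):

```python
# LS-SVM dual system:  [0  1^T        ] [b    ]   [0]
#                      [1  K + I/gamma] [alpha] = [y]
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(X)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = 1.0, 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, y])
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf_kernel(Xq, X, sigma) @ alpha + b

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))            # stand-in for the feature vectors
y = X[:, 0] * 2 + np.sin(X[:, 1])        # stand-in for PM2.5 readings
predict = lssvm_fit(X, y)
print(np.abs(predict(X) - y).mean())     # small training error
```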

Design of live video streaming, recording and storage system based on Flex, Red5 and MongoDB
ZHEN Jingjing, YE Yan, LIU Taijun, DAI Cheng, WANG Honglai
Journal of Computer Applications    2014, 34 (2): 589-592.  
To address the common problem that network video does not play smoothly during live streaming or on demand, and to find a storage strategy for massive video data, an overall design scheme for a real-time live video recording and storage system was presented. The open-source streaming media server Red5 and the rich Internet application technology Flex were used to implement live video streaming and recording, and the recorded video data were stored in the open-source NoSQL database MongoDB. Experimental results illustrate that the platform meets the requirements of multi-user access and data storage.
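
For the storage side, GridFS is MongoDB's standard mechanism for files beyond the 16 MB document limit; a sketch with invented file and database names of how a recorded FLV clip might be stored and read back:

```python
# Store a Red5-recorded FLV file in MongoDB via GridFS (names invented).
from pymongo import MongoClient
import gridfs

db = MongoClient("mongodb://localhost:27017")["live_platform"]
fs = gridfs.GridFS(db)
with open("session_0001.flv", "rb") as f:           # file recorded by Red5
    file_id = fs.put(f, filename="session_0001.flv",
                     metadata={"room": "demo", "codec": "flv"})
video = fs.get(file_id).read()                       # stream back for playback
```
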
High-speed data acquisition and transmission system for low-energy X-ray industrial CT
YANG Lei, GAO Fuqiang, LI Ling, CHEN Yan, LI Ren
Journal of Computer Applications    2014, 34 (11): 3361-3364.   DOI: 10.11772/j.issn.1001-9081.2014.11.3361

To meet the demand for high-speed scanning and massive data transmission in low-energy X-ray industrial Computed Tomography (CT), a high-speed data acquisition and transmission system for low-energy X-ray industrial CT was designed. The X-CARD 0.2-256G detector from DT was selected. To accommodate high-speed analog-to-digital conversion, a high-speed time-division multiplexing circuit was combined with ping-pong buffering of the data cache, and a gigabit Ethernet interface was designed around a Field Programmable Gate Array (FPGA) master chip to meet the requirements of high-speed multi-channel data transmission. Experimental results show that the data acquisition rate reaches 1 MHz, the transmission speed reaches 926 Mb/s, and the dynamic range is greater than 5000. The system effectively shortens the scanning time of low-energy X-ray inspection and meets the data transmission requirements of larger channel counts.
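
The ping-pong idea translates directly into software terms (illustrative only; on the FPGA the two buffers are block-RAM banks): one buffer fills while the other drains, so acquisition never stalls on transmission.

```python
# Alternate between two buffers: fill one, hand the other off.
import itertools

BUF_WORDS = 4
buffers = [[None] * BUF_WORDS, [None] * BUF_WORDS]

def acquire(sample_stream):
    """Fill buffer banks alternately; yield each full bank for transmission."""
    for bank in itertools.cycle((0, 1)):
        for i in range(BUF_WORDS):
            buffers[bank][i] = next(sample_stream)
        yield bank, list(buffers[bank])   # hand off while the other bank fills

samples = iter(range(16))
for bank, data in itertools.islice(acquire(samples), 4):
    print(f"transmit bank {bank}: {data}")
```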

Query algorithm based on mesh structure in large-scale smart grid
WANG Yan, HAO Xiuping, SONG Baoyan, LI Xuecheng, XING Zengwei
Journal of Computer Applications    2014, 34 (11): 3126-3130.   DOI: 10.11772/j.issn.1001-9081.2014.11.3126

Currently, queries in transmission line monitoring systems for the smart grid are mostly global queries over the Wireless Sensor Network (WSN), which cannot satisfy flexible and efficient queries over arbitrary areas. The layout and query characteristics of such networks were analyzed in detail, and MSQuery, a query algorithm based on a mesh structure for large-scale smart grids, was proposed. The algorithm aggregates the data of query nodes within different grids into one or more logical query trees, and builds an optimized path for collecting query results through a merging strategy over the logical query trees. Experiments compared MSQuery with RSA, which queries over a routing structure, and SkySensor, which queries over a cluster structure. Simulation results show that MSQuery returns query results in the query window more quickly, reduces communication cost, and saves the energy of sensor nodes.

Nonlinear modeling of power amplifier based on improved radial basis function networks
LI Ling, LIU Taijun, YE Yan, LIN Wentao
Journal of Computer Applications    2014, 34 (10): 2904-2907.   DOI: 10.11772/j.issn.1001-9081.2014.10.2904

For the nonlinear modeling of Power Amplifiers (PA), an improved Radial Basis Function Neural Network (RBFNN) model was proposed. Firstly, delayed cross terms and output feedback were added to the input, and the parameters of the model (weights and centers) were extracted with the Orthogonal Least Squares (OLS) algorithm. Then, a Doherty PA was successfully trained and validated with a 15 MHz three-carrier Wideband Code Division Multiple Access (WCDMA) signal, reaching a Normalized Mean Square Error (NMSE) of -45 dB. Finally, an inverse class-F power amplifier was used to test the generality of the model. Simulation results show that the model fits the characteristics of power amplifiers more accurately.
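
The model class can be shown in miniature: a design matrix of Gaussian radial basis functions fitted by least squares. A sketch; the paper's input additionally includes delayed cross terms and output feedback, and uses OLS to select centers, whereas here centers are a random subsample.

```python
# Gaussian RBF design matrix + ordinary least squares fit.
import numpy as np

def rbf_design(X, centers, width=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))

rng = np.random.default_rng(0)
x_in = rng.normal(size=(400, 3))            # e.g. [x(n), x(n-1), y(n-1)]
y_out = np.tanh(x_in @ np.array([1.0, 0.5, 0.3]))   # toy PA nonlinearity
centers = x_in[rng.choice(len(x_in), 25, replace=False)]
Phi = rbf_design(x_in, centers)
w, *_ = np.linalg.lstsq(Phi, y_out, rcond=None)
nmse = 10 * np.log10(((Phi @ w - y_out) ** 2).sum() / (y_out ** 2).sum())
print(f"NMSE = {nmse:.1f} dB")
```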

Algorithm of optimal surface deployment in wireless sensor networks
LI Yingfang, YAN Li, YANG Bo
Journal of Computer Applications    2013, 33 (10): 2730-2733.  
Node deployment is a basic problem in sensor networks and directly determines the performance of the entire network. Most existing research on sensor node deployment addresses two-dimensional planes or three-dimensional space, with very little on three-dimensional surface deployment. An algorithm for optimal surface deployment in wireless sensor networks was therefore proposed. First, a mathematical model of the three-dimensional surface was constructed using analytic or differential geometry; then the surface was partitioned by a centroidal Voronoi tessellation, and an error function was proposed to evaluate the quality of a deployment. Comparisons with other surface deployment methods show that the performance of the proposed algorithm is superior.
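
Centroidal Voronoi partitioning has a compact approximation: Lloyd iterations, i.e. k-means on dense surface samples, move the sites toward the centroids of their Voronoi cells. A 2D-parameter sketch of the idea, not the paper's algorithm:

```python
# Lloyd iterations drive sensor sites toward Voronoi-cell centroids.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.uniform(size=(5000, 2))        # dense samples of the surface
sites = rng.uniform(size=(12, 2))            # initial sensor positions
for _ in range(30):
    d = ((samples[:, None] - sites[None]) ** 2).sum(-1)
    owner = d.argmin(1)                      # Voronoi cell of each sample
    for j in range(len(sites)):
        cell = samples[owner == j]
        if len(cell):
            sites[j] = cell.mean(0)          # move site to cell centroid

owner = ((samples[:, None] - sites[None]) ** 2).sum(-1).argmin(1)
err = ((samples - sites[owner]) ** 2).sum(-1).mean()
print(f"mean squared coverage error: {err:.4f}")
```
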
Digital watermarking protocol based on El Gamal algorithm
YAN Lixia, XIAO Mingbo
Journal of Computer Applications    2013, 33 (09): 2529-2531.   DOI: 10.11772/j.issn.1001-9081.2013.09.2529
In light of the drawbacks of current digital watermarking protocols, such as requiring frequent involvement of buyers, assuming buyers have knowledge of signatures or watermarks, and not considering appropriate usage control of digital products, a secure, practical and extensible watermarking protocol was proposed, utilizing the homomorphic, commutative El Gamal encryption algorithm and a machine-fingerprint-based copyright control scheme. Besides the basic functions of a digital watermarking protocol, this protocol also balances the interests of buyer and seller to some extent, and improves the user experience with a transaction model similar to the traditional one.
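
El Gamal suits such protocols because it is multiplicatively homomorphic: a party can multiply encrypted values without decrypting them. A toy demonstration (tiny parameters only; real use needs a large safe prime):

```python
# El Gamal keygen, encryption, and homomorphic ciphertext multiplication.
import random

p, g = 2579, 2                      # toy prime and generator
x = random.randrange(2, p - 1)      # private key
h = pow(g, x, p)                    # public key

def encrypt(m):
    k = random.randrange(2, p - 1)
    return pow(g, k, p), m * pow(h, k, p) % p

def decrypt(c):
    c1, c2 = c
    return c2 * pow(c1, p - 1 - x, p) % p   # c1^(-x) via Fermat

def ct_mul(a, b):                   # homomorphic multiplication
    return a[0] * b[0] % p, a[1] * b[1] % p

m1, m2 = 17, 23
assert decrypt(ct_mul(encrypt(m1), encrypt(m2))) == (m1 * m2) % p
```
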
Optimization algorithm for I-V curve fitting of solar cell
HU Keman, HU Haiyan, LIU Guiguo
Journal of Computer Applications    2013, 33 (05): 1481-1484.   DOI: 10.3724/SP.J.1087.2013.01481
A new optimization algorithm, GA-AFSA, was proposed by integrating the Genetic Algorithm (GA) and the Artificial Fish Swarm Algorithm (AFSA) to fit the mathematical model of the I-V curve of a solar cell. It retains the global optimization ability of GA and the quick convergence of AFSA while overcoming GA's slow convergence and AFSA's aimless stepping. By fitting the five key parameters of the I-V curve, namely the photo-generated current, the diode quality factor, the series resistance, the reverse saturation current and the shunt resistance, GA-AFSA achieves a great improvement: compared with the existing algorithm, it has higher precision and faster convergence.
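
The fit targets the standard five-parameter single-diode model; given measured (V, I) pairs, any optimizer, GA-AFSA included, minimizes the squared residuals of the implicit equation below. The optimizer itself is omitted from this sketch, and the sample data are hypothetical.

```python
# Residual of the single-diode model:
#   I = Iph - I0*[exp((V + I*Rs)/(n*Vt)) - 1] - (V + I*Rs)/Rsh
import numpy as np

def residual(params, V, I, Vt=0.0259):
    """params: Iph (photocurrent), I0 (saturation), n (quality),
    Rs (series), Rsh (shunt); Vt is the thermal voltage near 300 K."""
    Iph, I0, n, Rs, Rsh = params
    return (Iph - I0 * np.expm1((V + I * Rs) / (n * Vt))
            - (V + I * Rs) / Rsh - I)

guess = (5.0, 1e-9, 1.3, 0.02, 100.0)
V = np.linspace(0, 0.55, 12)
I_meas = np.full_like(V, 4.9)             # hypothetical measured currents
print(np.sum(residual(guess, V, I_meas) ** 2))   # fitness to minimize
```
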
Detection and defense scheme for selective forwarding attacks in wireless sensor network
FU Xiang-yan, LI Ping, WU Jia-ying
Journal of Computer Applications    2012, 32 (10): 2711-2715.   DOI: 10.3724/SP.J.1087.2012.02711
To improve the detection rate for malicious nodes and the defensive capability of the system, a detection method based on an optimal random routing algorithm and neighbor node monitoring was proposed against selective forwarding attacks in Wireless Sensor Networks (WSN). The method builds the forwarding path from parameters such as distance and trust degree, and uses node monitoring to detect and defend against malicious nodes during route discovery and selection. Simulations were run in the Matlab environment and performance was compared with other methods. Analysis and simulation results show that the method detects selective forwarding attacks effectively and ensures reliable packet delivery to the destination with relatively little energy.
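
A sketch of the two ingredients with invented parameters: next hops are drawn with probability favoring close, trusted neighbors, and a watchdog lowers a neighbor's trust whenever it fails to forward.

```python
# Trust-and-distance-weighted next-hop choice plus a watchdog update.
import random

trust = {"n1": 1.0, "n2": 1.0, "n3": 1.0}
dist = {"n1": 2.0, "n2": 1.0, "n3": 1.5}     # distance to sink

def pick_next_hop():
    w = {n: trust[n] / dist[n] for n in trust}
    r, acc = random.uniform(0, sum(w.values())), 0.0
    for n, wn in w.items():
        acc += wn
        if r <= acc:
            return n
    return n

def watchdog(node, forwarded, alpha=0.3):
    trust[node] = min(1.0, trust[node] + 0.05) if forwarded \
        else (1 - alpha) * trust[node]

for _ in range(200):                 # n2 silently drops 60% of packets
    hop = pick_next_hop()
    watchdog(hop, forwarded=(random.random() > 0.6) if hop == "n2" else True)
print({n: round(t, 2) for n, t in trust.items()})   # n2's trust collapses
```
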
Data storage method supporting large-scale smart grid
SONG Bao-yan, ZHANG Hong-mei, WANG Yan, LI Qiong
Journal of Computer Applications    2012, 32 (09): 2496-2499.   DOI: 10.3724/SP.J.1087.2012.02496
Since the monitoring data in a large-scale smart grid are massive, real-time and dynamic, a new data-centric storage approach supporting large-scale smart grids was proposed: a hierarchical extension scheme for storing massive dynamic data. Firstly, an extended Hash coding method adjusts the number of storage nodes dynamically to avoid losing data on sudden or frequent events and to increase system availability. Then, a multi-threshold leveling method distributes data across multiple storage nodes, avoiding hotspot storage problems and achieving load balance. Simulation results show that this method satisfies the need for massive data storage, achieves better load balance, lowers total energy consumption and extends the life cycle of the whole network.
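
The goal of the extended Hash coding step, adding storage nodes without reshuffling most data, is best known in the form of consistent hashing; the sketch below shows that stand-in technique, not the paper's coding method.

```python
# Consistent hashing: adding a node moves only a fraction of keys.
import bisect, hashlib

def h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self.ring = sorted((h(n), n) for n in nodes)

    def add(self, node):
        bisect.insort(self.ring, (h(node), node))

    def owner(self, key):
        i = bisect.bisect(self.ring, (h(key), "")) % len(self.ring)
        return self.ring[i][1]

ring = Ring([f"node{i}" for i in range(4)])
before = {f"meas{j}": ring.owner(f"meas{j}") for j in range(1000)}
ring.add("node4")                        # scale out under load
moved = sum(ring.owner(k) != v for k, v in before.items())
print(f"{moved / 1000:.0%} of keys moved")   # only a fraction relocates
```
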
Routing algorithm based on link bandwidth utilization rate
YANG Xiao-qin, ZHANG Li-fang, CAO Qing-huang, SUN Hai-yan, LI Zhuo-qing
Journal of Computer Applications    2012, 32 (09): 2422-2425.   DOI: 10.3724/SP.J.1087.2012.02422
To avoid network congestion, and considering that current algorithms ignore network flow distribution and user perception, a routing algorithm that maximizes load balance was proposed. The algorithm decreases the maximum bandwidth utilization of the network while guaranteeing delay. Experiments on the real Abilene2 network topology show that the proposed algorithm alleviates network congestion and improves network utilization by over 50%. Compared with existing algorithms, it meets both bandwidth utilization and network delay requirements, and by adjusting its parameter values it can meet different business requirements in actual networks.
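
The selection rule can be shown on a toy graph: among candidate paths, pick the one whose worst-link bandwidth utilization after adding the flow is lowest. A sketch that ignores the delay constraint the paper also enforces:

```python
# Min-max link-utilization routing over two candidate paths.
capacity = {("s", "a"): 10, ("a", "t"): 10, ("s", "b"): 5, ("b", "t"): 5}
load = {e: 0.0 for e in capacity}
paths = [[("s", "a"), ("a", "t")], [("s", "b"), ("b", "t")]]

def route(demand):
    def worst_util(path):
        return max((load[e] + demand) / capacity[e] for e in path)
    best = min(paths, key=worst_util)
    for e in best:
        load[e] += demand
    return best

for d in (3, 3, 3):      # successive flows spread across both paths
    print(route(d), {e: round(load[e] / capacity[e], 2) for e in load})
```
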
Improved fast new edge-directed fractional interpolation algorithm
LIU Nan, BI Du-yan, LIN Jia-hao, YANG Zhong-bin
Journal of Computer Applications    2012, 32 (07): 1864-1867.   DOI: 10.3724/SP.J.1087.2012.01864
The original New Edge-Directed Interpolation (NEDI) algorithm has high complexity, is difficult to implement in hardware, and may blur edges in the interpolated images. To achieve better subjective quality, an improved NEDI algorithm was proposed. In the new algorithm, a circular window was adopted and the interpolation coefficients were calculated only once and reused for interpolating the center pixels, avoiding the errors introduced by iterative computation and saving interpolation time. For non-center pixels, six original neighbors were used to estimate the local covariance characteristics at high resolution. Compared with bicubic interpolation and traditional NEDI, experimental results indicate that the proposed algorithm eliminates large-scale sawtooth artifacts in the interpolated image and decreases computational complexity.
New method for fast image dehazing
MA Jiang-feng, YANG Zhong-bin, BI Du-yan, LI Quan-he
Journal of Computer Applications    2012, 32 (06): 1567-1569.   DOI: 10.3724/SP.J.1087.2012.01567
The variables of the original Koschmieder model cannot be determined exactly, so its solution is an ill-posed problem. Thus, a modified Koschmieder model that is much easier to solve was proposed, one that has much in common with the atmospheric degradation model, and a fast image dehazing method was built on it. Compared with He's dehazing method, experimental results show that the proposed method achieves fast dehazing while keeping the scene's color constancy and obtaining equal or better contrast enhancement.
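
For reference, the standard atmospheric degradation model alluded to above (the form used in He's dark-channel work) is shown below; the paper's own modified Koschmieder model is not reproduced here.

```latex
% I: observed hazy image, J: scene radiance, A: global atmospheric
% light, t: medium transmission with scattering coefficient \beta.
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}
```
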
Task allocation based on ant colony optimization in cloud computing
ZHANG Chun-yan, LIU Qing-lin, MENG Ke
Journal of Computer Applications    2012, 32 (05): 1418-1420.  
Concerning the defects of Ant Colony Optimization (ACO) for task allocation, a grouping and polymorphic ACO was proposed to improve service quality. The algorithm divides the ants into three groups, searching ants, scouting ants and working ants, and updates the forecast completion time to gradually minimize the average completion time and reduce the chance of falling into a local optimum. It was finally simulated and implemented with the CloudSim toolkit. Experimental results show that the time for handling requests and tasks is reduced and task-handling efficiency is improved.
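
A bare-bones ACO task-to-VM allocator as a generic sketch (the paper's three ant roles and forecast-time updates are omitted): ants build assignments guided by pheromone on (task, VM) pairs, and the best assignment per iteration reinforces its trail.

```python
# Pheromone-guided random assignment with evaporation and reinforcement.
import random

tasks = [8, 3, 5, 9, 2, 7]                 # task lengths
speed = [2.0, 1.0, 1.5]                    # VM speeds
pher = [[1.0] * len(speed) for _ in tasks]

def makespan(assign):
    load = [0.0] * len(speed)
    for t, v in enumerate(assign):
        load[v] += tasks[t] / speed[v]
    return max(load)

best, best_ms = None, float("inf")
for _ in range(50):                        # iterations
    for _ in range(10):                    # ants
        assign = [random.choices(range(len(speed)), pher[t])[0]
                  for t in range(len(tasks))]
        ms = makespan(assign)
        if ms < best_ms:
            best, best_ms = assign, ms
    for t in range(len(tasks)):            # evaporate, then reinforce best
        pher[t] = [0.9 * p for p in pher[t]]
        pher[t][best[t]] += 1.0 / best_ms
print(best, round(best_ms, 2))
```
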
Analysis on schedulability of fixed-priority multiprocessor scheduling
BAI Lu, YAN Li
Journal of Computer Applications    2012, 32 (03): 603-605.   DOI: 10.3724/SP.J.1087.2012.00603
Concerning the Fixed-Priority (FP) algorithm for multiprocessor real-time scheduling, an improved schedulability test was proposed. Baruah's window analysis framework for Earliest Deadline First (EDF) was applied to FP, the maximum number of higher-priority tasks with carry-in was bounded by m-1 (with m the number of processors), and a new upper bound on the interference a task suffers was obtained, from which a tighter sufficient schedulability condition was derived. Simulation results show that the schedulability test is more efficient, detecting a larger number of schedulable task sets.